Virtual odometry from visual flow
Authors
Abstract
We investigate how visual motion registered during one's own movement through a structured world can be used to gauge travel distance. Estimating absolute travel distance from the visual flow induced in the optic array of a moving observer is problematic because optic flow speeds co-vary with the dimensions of the environment and are thus subject to an environment-specific scale factor. However, discrimination of the distances of two simulated self-motions of different speed and duration is reliably possible from optic flow if the visual environment is the same for both motions, because the scale factors then cancel. Here, we ask whether a distance estimate obtained from optic flow can be transformed into a spatial interval in the same visual environment. Subjects viewed a simulated self-motion sequence on a large (90 by 90 deg) projection screen or in a computer-animated virtual environment (CAVE) with completely immersive, stereographic, head-yoked projection that extended 180 deg horizontally and included the floor space in front of the observer. The sequence depicted self-motion over a ground plane covered with random dots. Simulated distances ranged from 1.5 to 13 meters, with variable speed and duration of the movement. After the movement stopped, the screen depicted a stationary view of the scene and two horizontal lines appeared on the ground in front of the observer. The subject had to adjust one of these lines such that the spatial interval between the lines matched the distance traveled during the movement simulation. Adjusted interval size was linearly related to simulated travel distance, suggesting that observers could obtain a measure of distance from the optic flow. The slope of the regression was 0.7; thus, subjects underestimated distance by 30%. This result was similar for stereoscopic and monoscopic conditions. We conclude that optic flow can be used to derive an estimate of travel distance, but this estimate is subject to scaling when compared to static intervals in the environment, irrespective of stereoscopic depth cues.
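A minimal sketch of the scale-factor argument, assuming the simple case of translation at eye height h over a flat ground plane (all function names and numbers below are illustrative, not from the paper): integrating optic flow speed over time yields travel distance divided by h, so absolute distance is ambiguous, while the ratio of two distances traveled through the same environment is not.

```python
# Illustrative sketch (assumed names and numbers, not from the paper):
# integrating optic flow speed gives travel distance only up to the
# environment-specific scale factor, here the eye height h above the ground.
import numpy as np

def flow_speed(translation_speed, eye_height):
    """Angular flow speed (rad/s) of a ground point directly below the eye.

    For a ground plane seen from eye height h, flow speed scales as v / h:
    the same physical speed produces slower flow from higher up.
    """
    return translation_speed / eye_height

def integrated_flow(speeds, dt, eye_height):
    """Integrate instantaneous flow speed over the motion.

    The result equals (travel distance) / (eye height): distance in
    eye-height units rather than meters.
    """
    return float(np.sum(flow_speed(np.asarray(speeds), eye_height)) * dt)

dt = 0.01                       # sampling interval in seconds
motion_a = np.full(300, 2.0)    # 2.0 m/s for 3 s -> 6 m traveled
motion_b = np.full(200, 2.5)    # 2.5 m/s for 2 s -> 5 m traveled
h = 1.6                         # eye height in meters, unknown to the observer

d_a = integrated_flow(motion_a, dt, h)   # 6.0 / h = 3.75
d_b = integrated_flow(motion_b, dt, h)   # 5.0 / h = 3.125

# Absolute distances depend on the unknown h, but their ratio does not,
# which is why discrimination works when the environment is the same:
print(d_a / d_b)   # 1.2, independent of h
```

In the same environment, such a scale-ambiguous estimate can in principle be compared against a static interval on the ground, which is the transformation the experiment tests; the regression slope of 0.7 indicates that this mapping compresses perceived travel distance relative to the static interval.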
Similar papers
Learning monocular visual odometry with dense 3D mapping from dense 3D flow
This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow...
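As a plain geometric illustration of the 3D-flow construction this snippet describes (this is not the learned L-VO network; the intrinsics K and all array names are assumptions), dense 3D flow can be assembled from dense 2D flow plus per-pixel depth by back-projecting each pixel in both frames and differencing the matched 3D points:

```python
# Hypothetical sketch: dense 3D flow from dense 2D flow + depth maps,
# assuming a pinhole camera with intrinsics K. Not the paper's network.
import numpy as np

def backproject(depth, K):
    """Back-project a depth map (H, W) into camera-frame 3D points (H, W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.stack([x, y, depth], axis=-1)

def dense_3d_flow(depth0, depth1, flow2d, K):
    """3D displacement of each pixel's scene point between two frames.

    flow2d has shape (H, W, 2) holding (du, dv) per pixel: pixel (u, v) in
    frame 0 is matched to (u + du, v + dv) in frame 1, and the 3D flow is
    the difference of the two back-projected points.
    """
    H, W = depth0.shape
    pts0 = backproject(depth0, K)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    u1 = np.clip(np.round(u + flow2d[..., 0]).astype(int), 0, W - 1)
    v1 = np.clip(np.round(v + flow2d[..., 1]).astype(int), 0, H - 1)
    pts1 = backproject(depth1, K)[v1, u1]
    return pts1 - pts0

# Tiny synthetic example: the scene moves 0.1 m toward the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth0 = np.full((480, 640), 4.0)
depth1 = np.full((480, 640), 3.9)
flow2d = np.zeros((480, 640, 2))            # no image-plane motion
print(dense_3d_flow(depth0, depth1, flow2d, K)[240, 320])  # ~[0, 0, -0.1]
```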
Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices
Simultaneous localization and mapping (SLAM) is emerging as a prominent topic in computer vision and a next-generation core technology for robots, autonomous navigation, and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications...
Visual Odometry Using Commodity Optical Flow
A wide variety of techniques for visual navigation using robot-mounted cameras have been described over the past several decades, yet adoption of optical flow navigation techniques has been slow. This demo illustrates what visual navigation has to offer: robust hazard detection (including precipices and obstacles), high-accuracy open-loop odometry, and stable closed-loop motion control implemen...
Minimizing Image Reprojection Error For Relative Scale Estimation In Visual Odometry
In this paper we address the problem of visual motion estimation (visual odometry) from a single vehicle-mounted camera. One of the basic issues of visual odometry is relative scale estimation. We propose a method to compute locally optimal relative scales by minimizing the reprojection error using windowed bundle adjustment. We introduce a minimal parameterization of the bundle adjustment problem...
Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous
This paper describes extensions to the Kintinuous [1] algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; (iii) advanced fused real-time surface coloring. These extensions are val...